  DirectX9's HLSL & NVidia's Cg  
From: Ryan Bennitt
Date: 1 Nov 2003 07:23:55
Message: <3fa3a5db@news.povray.org>
I was browsing the DirectX 9 documentation, in particular looking at the High
Level Shader Language (HLSL) specifications, and started thinking about the
potential of this language, and for that matter languages like NVidia's Cg
language. I was wondering whether we would ever see raytracers written in
vertex/pixel shader languages like these. It strikes me that the latest graphics
cards are turning into small parallel supercomputers, capable not only of
performing vector arithmetic very quickly, but also of supporting flow control
and custom data structures. Currently they are used purely to transform the
vertices of meshes, but I see no reason why they couldn't transform an array of
vectors and floats into spheres and calculate ray intersections and
texture/normal values. Is a raytracer not just a complicated vertex/pixel shader
program?
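
To make that concrete, here is a minimal sketch in C of the per-pixel work
involved: a ray-sphere intersection test. It's C rather than HLSL, but the body
is nothing beyond dot products and a square root, exactly the vector intrinsics
these shader languages expose. The function names are mine, not from any shader
library.

/* Minimal ray-sphere intersection -- a sketch of the per-pixel work a
 * raytracing pixel shader would do. Ray: origin + t*dir, dir normalized. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns the nearest positive ray parameter t, or -1.0f on a miss. */
float intersect_sphere(vec3 origin, vec3 dir, vec3 center, float radius)
{
    vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = dot3(oc, dir);                 /* half the linear coefficient */
    float c = dot3(oc, oc) - radius * radius;
    float disc = b * b - c;                  /* quadratic discriminant */
    if (disc < 0.0f)
        return -1.0f;                        /* ray misses the sphere */
    float t = -b - sqrtf(disc);              /* nearer of the two roots */
    return (t > 0.0f) ? t : -1.0f;
}

A pixel shader would run this once per pixel, deriving the ray direction from
the pixel's screen coordinates.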

Graphics cards are effectively becoming as flexible as processors in terms of
the functions they can perform. Plus, if a graphics card contains 4+
vertex/pixel pipelines ('pipeline' is a bit of a misnomer coined for marketing
purposes these days; I reckon it should really be 'processor'), each running at
around 500MHz, with access to a large 128MB+ shared memory running at a similar
speed, what you've effectively got is a parallel computer with processing
capability comparable to the CPU's. We all know raytracing is a highly parallel
process; attempts at shared processing over a network demonstrate this. Can we
not have a 'server' running on a single machine, serving batches of pixels to be
rendered on either the CPU or on a pipeline in the GPU as each becomes free?
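
As a rough sketch of what that 'server' might look like, here is the core of it
in C: a shared cursor over the image from which each worker claims its next
batch. All the names here (next_batch, BATCH_SIZE) are illustrative, and a real
version would need a lock around the cursor.

/* Sketch of the batch 'server': each worker -- CPU thread, GPU feeder,
 * or network client -- calls next_batch() whenever it runs out of work. */
#include <stddef.h>

#define BATCH_SIZE 4096              /* pixels handed out per request */

typedef struct {
    size_t first_pixel;              /* index of the first pixel */
    size_t count;                    /* pixels to render (0 = image done) */
} batch_t;

static size_t next_pixel = 0;        /* shared cursor; lock it in real code */
static size_t total_pixels = 0;      /* width * height, set after parsing */

batch_t next_batch(void)
{
    batch_t b;
    b.first_pixel = next_pixel;
    b.count = (next_pixel + BATCH_SIZE <= total_pixels)
                  ? BATCH_SIZE
                  : total_pixels - next_pixel;
    next_pixel += b.count;
    return b;
}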

Now there are certain functions the GPU won't be able to perform (such as
parsing), but once the CPU has parsed the data it can copy the necessary data
structures into the graphics card's memory, and then both the CPU and GPU can
start rendering sets of pixels in the image.
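
That upload step could be as simple as flattening the parsed scene into plain
float arrays, the form a card can take as shader constants or texture data. A
hypothetical sketch for spheres (a centre plus a radius packs neatly into four
floats, one RGBA texel):

/* Flatten parsed spheres into a float array ready for upload; the CPU
 * keeps its own copy and renders from it in parallel. Names hypothetical. */
#include <stddef.h>

typedef struct { float cx, cy, cz, radius; } gpu_sphere;

size_t pack_spheres(const gpu_sphere *scene, size_t n, float *out)
{
    for (size_t i = 0; i < n; i++) {
        out[4*i + 0] = scene[i].cx;
        out[4*i + 1] = scene[i].cy;
        out[4*i + 2] = scene[i].cz;
        out[4*i + 3] = scene[i].radius;
    }
    return 4 * n;                    /* number of floats written */
}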

The kind of architecture required to send batches of pixels to either the CPU or
GPU would of course facilitate network rendering too. If the CPU has to give
batches of pixels to itself or the GPU to render, and both have to ask for more
pixels when they finish a batch, then there's no reason why this can't be
applied to other computers on the network that, once given the scene files to
parse, can request batches of pixels for their own CPU/GPU.
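
A handful of message types would cover that protocol. The layout below is
purely illustrative, but the point is that the same request path serves local
workers and remote machines alike:

/* Sketch of the wire messages such a batch protocol might use.
 * Field layout is illustrative only. */
#include <stdint.h>

enum msg_type {
    MSG_REQUEST_BATCH = 1,   /* worker -> server: "give me pixels"  */
    MSG_BATCH         = 2,   /* server -> worker: range to render   */
    MSG_RESULT        = 3,   /* worker -> server: rendered RGB data */
    MSG_DONE          = 4    /* server -> worker: image is finished */
};

typedef struct {
    uint32_t type;           /* one of msg_type */
    uint32_t first_pixel;    /* start of the batch */
    uint32_t count;          /* pixels in the batch */
    /* a MSG_RESULT header is followed by count RGB triples */
} msg_header;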

The main question is this: is a raytracer simply too complicated to compile and
run on a GPU?

